
Geoffrey Hinton Warns of the “Existential Threat” of AI
Clip: 5/9/2023 | 17m 54s
Geoffrey Hinton joins the show.
Geoffrey Hinton is considered the godfather of Artificial Intelligence and made headlines with his recent departure from Google. He quit to speak freely and raise awareness of the risks of AI. To dive deeper into the dangers and how to manage them, he joins Hari Sreenivasan.
Christiane: Our next guest believes the threat of A.I.
might be even more urgent than climate change, if you can imagine that.
Geoffrey Hinton is considered the godfather of A.I.
and made headlines with his recent departure from Google.
To dive deeper into the dangers of A.I.
and how to manage them, he's joining Hari Sreenivasan now.
Reporter: Thank you so much, Christiane.
You are one of the more celebrated names in artificial intelligence.
You have been working at this for more than 40 years and I wonder, as you thought about how computers learn, did it go the way you thought it would when you started in this field?
>> It did until recently.
I thought if we built computer models of how the brain learns, we would understand more about how the brain learns.
All that was going on very well and then very suddenly, I realized recently that maybe the digital intelligences we were building on computers were actually learning better than the brain, and that changed my mind after about 50 years of thinking we would make better digital intelligences by making them more like the brain.
Reporter: This is something you and your colleagues must have been thinking about over these years.
Was there a tipping point?
>> There were several ingredients to it.
A year or two ago, I used a Google system called PaLM -- that was a big chatbot, and it couldn't explain why jokes were funny.
I've been using that as a litmus test of whether or not these things could understand what was going on.
I was shocked when it could explain why they were funny.
Things like ChatGPT know thousands of times more than any human in basic common-sense knowledge, but they have about a trillion connection strengths in their artificial neural nets, and we have about 100 trillion connection strengths in the brain. So with a hundredth as much storage capacity, they know thousands of times more than us, and that strongly suggests they've got a better way of getting information into the connections.
And the third thing was, a couple of months ago, I suddenly became convinced that the brain wasn't using as good a learning algorithm as these digital intelligences.
Because brains can't exchange information really fast and these digital intelligences can.
I can have one model running on 10,000 different bits of hardware.
It's got the same connection strengths to every copy.
Every agent running on the different hardware can learn from different bits of data, but they can communicate what they learned to each other just by copying the weights, because they all work identically, and brains aren't like that.
So they can communicate at trillions of bits a second.
And we can communicate at hundreds of bits a second.
It's such a huge difference.
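What Hinton describes here is, in effect, weight sharing across identical model replicas. The toy sketch below is purely illustrative (the numbers and names are invented, and it is not a description of Google's or anyone's actual training systems): several copies of one model each learn from their own shard of data, then pool what they learned simply by exchanging and averaging their weight updates, which only works because every copy is digitally identical.

```python
import numpy as np

# Toy illustration of Hinton's point: identical digital models can pool what
# they learn by copying/averaging weight updates. All values are made up.

rng = np.random.default_rng(0)
n_copies, n_weights = 4, 8          # pretend 4 hardware copies of one model
weights = np.zeros(n_weights)       # every copy starts with identical weights

# Each copy sees a different shard of data and computes its own update.
local_updates = [rng.normal(size=n_weights) * 0.01 for _ in range(n_copies)]

# Because the copies are identical, they can share everything they learned
# just by exchanging and averaging weight updates (billions of numbers at
# hardware bandwidth, versus the few bits per second of human speech).
weights += np.mean(local_updates, axis=0)

print("shared weights after one round:", weights.round(4))
```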
Reporter: For some who may not have been following what's been happening with A.I.
and ChatGPT and Google's product Bard, explain what those are.
Some have explained it as the autocomplete feature, finishing your thought for you.
But what are these artificial intelligences doing?
>> It's difficult to explain, but I'll do my best.
It's true in a sense they're autocomplete, but if you think about it, to do really good autocomplete you need to understand what someone is saying, and they've learned that just by doing autocomplete, and they do now seem to really understand.
The way they understand isn't at all like people in A.I.
50 years ago thought it would be.
In old-fashioned A.I., people thought you'd have internal symbolic expressions, a bit like whole sentences in your head and then be able to infer new sentences from old sentences, and it's nothing like that.
It's completely different.
And let me give you a sense of how different it is.
I can give you a problem that doesn't make any sense in logic.
You know the answer intuitively, and these big models are really models of human intuition.
So suppose I tell you that there are male and female cats and male dogs and female dogs.
Suppose I tell you you have to make a choice.
Either you're going to have all cats male and all dogs female or all cats female and all dogs male.
You know it's biological nonsense, but you also know it's much more natural to make all cats female and all dogs male.
That's not a question of logic.
You have a big pattern of neural activity that represents cat, and patterns that represent man and woman.
And the pattern for cat is more like the pattern for woman than it is the pattern for man.
That's the result of a lot of learning about men and women and cats and dogs.
But it's now intuitively obvious to you that cats are more like women and dogs are more like men because of these big patterns of neural activity you've learned.
You didn't have to do reasoning, it's just obvious.
That's how these things are working.
They're learning these big patterns of activity to represent things.
That makes all sorts of things just obvious to them.
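The "big patterns of neural activity" Hinton refers to are what researchers typically call embeddings, i.e. vectors of numbers. The sketch below uses invented toy vectors (not taken from any real model) just to show how cosine similarity between such patterns can make "cat is more like woman, dog is more like man" an immediate judgment rather than a piece of logical reasoning.

```python
import numpy as np

# Invented toy embeddings (not from any real model) to illustrate how
# similarity between "patterns of activity" makes some pairings feel natural.
vectors = {
    "cat":   np.array([0.9, 0.2, 0.7]),
    "dog":   np.array([0.3, 0.9, 0.6]),
    "woman": np.array([0.8, 0.3, 0.8]),
    "man":   np.array([0.2, 0.8, 0.7]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("cat~woman:", round(cosine(vectors["cat"], vectors["woman"]), 3))
print("cat~man:  ", round(cosine(vectors["cat"], vectors["man"]), 3))
print("dog~man:  ", round(cosine(vectors["dog"], vectors["man"]), 3))
print("dog~woman:", round(cosine(vectors["dog"], vectors["woman"]), 3))
```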
Reporter: Ideas like intuition and basically context, those are the things that scientists and researchers always said, This is why we're fairly positive that we're not going to head to that "Terminator" scenario where the artificial intelligence gets smarter than human beings. But what you're describing is -- these are almost consciousness, sort of emotional-level decision processes.
>> OK, I think if you bring sentience into it...it just clouds the issue. So lots of people are confident these things aren't sentient yet.
And if you ask them what they mean by sentient, they don't know.
I don't understand how they're so confident they're not sentient if they don't know what they mean by "sentient."
Suppose I'm talking to a chat bot, and I suddenly realize it's telling me all sorts of things I don't want to know, like it's rushing out responses about somebody called Beyonce, who I'm not interested in because I'm an old white male.
I realize it suddenly thinks I'm a teenage girl.
If I were to ask it am I a teenage girl, it would say yes.
If I looked at the history of our conversation, I'd probably be able to say why it thinks I'm a teenage girl.
I'm using the word think in just the same sense we normally use it in.
It really just thinks that.
Reporter: Give me an idea of why this is such a significant leap forward.
It seems like there are parallels to concerns from the 1980s and 1990s, when blue-collar workers were concerned about robots coming in and replacing them and not being able to control them.
And now this is kind of a threat to the white-collar class, people saying there are these bots and agents that can do a lot of things we used to think only people could do?
>> I think there are a lot of things we need to worry about with these new kinds of digital intelligences.
I'm talking about the existential threat, where they'll become more intelligent than us and get control.
There are many other threats, which are also severe.
They include these things taking away jobs.
In a decent society, that would be great.
It would mean everything got more productive and everyone was better off but the danger is it will make the rich richer and the poor poorer.
That's not A.I.'s fault.
That's how we organize society.
There are dangers of them making it impossible to know what's true, with so many fakes out there.
That's something you might address by treating it like counterfeiting.
Governments don't like you printing their money.
It's a serious offense.
It's also a serious offense to pass it to somebody else if you knew it was fake.
I think governments are going to have to create similar regulations for fake videos and fake voices and fake images.
It's going to be hard.
As far as I can see the only way to stop ourselves being swamped by fake images, etc., is to have strong regulation and serious laws.
You have serious consequences if you produce a video by A.I.
and it doesn't say it's made with A.I.
That's what they do with counterfeit money.
I talked to Bernie Sanders last week about it, and he liked that view of it.
Reporter: Can governments and central banks and private banks all agree on certain standards because there's money at stake?
And I wonder, is there enough incentive for governments to sit down together and try to craft some sort of rules of what's acceptable and what's not?
Some sort of Geneva convention or accords?
>> It would be great if governments could say, Look, these fake videos are so good at manipulating the electorate that we need them all marked as fake.
Otherwise we're going to lose democracy.
The problem is that some politicians would like to lose democracy, so that's going to make it hard.
Reporter: So how do you solve for that?
It seems like this genie is out of the bottle.
>> What we're talking about right now is the genie of being swamped with fake news.
Organizations like Cambridge Analytica had an effect by pumping out fake news on Brexit, and it's clear that Facebook was manipulated to have an effect on the 2016 election so the genie is out of the bottle in that sense.
But that's not what I'm talking about.
The main thing I'm talking about is the risk of these things becoming super intelligent and taking over control from us.
I think for the existential threat, we're all in the same boat: the Chinese, the Americans, the Europeans.
They all would not like super intelligence to take over from people.
So I think for that existential threat we will get collaboration between all the companies and countries because none of them want the super intelligence to take over.
In that sense that's like global nuclear war, where even during the Cold War, people could collaborate on its prevention because it was not in anybody's interest.
That's one, in a sense, positive thing about this existential threat.
It should be possible to get people to prevent it.
Reporter: One of your more recent employers was Google and you were a V.P.
and a fellow there, and you recently decided to leave the company to be able to speak more freely about A.I.
They just launched their own version, Bard, back in March.
Here we are now.
What do you feel like you can say today, or will say today, that you couldn't a few months ago?
>> Not much, really.
I just wanted to be -- if you work for a company and you're talking to the media, you tend to think, What implications does this have for the company?
At least you ought to think that because they're paying you.
I don't think it's honest to take the money from the company and then completely ignore the company's interests.
But if I don't take the money, I can just say what I think.
It happens to be the case that -- everybody wants to spin the story that I left Google because they were doing bad things.
That's more or less the opposite of the truth.
I think Google is very responsible, and I think having left Google, I can say good things about it and be more credible.
I'm just less constrained.
Reporter: Do you think that tech companies, given that it's mostly their engineering staff trying to work on developing these intelligences, are going to have better opportunities to create the rules of the road than, say, governments or third parties?
>> I do, actually.
There are some places governments have to be involved, like regulations that force you to show whether something was A.I.-generated, but in terms of keeping control of a superintelligence, what you need is the people who are developing it to be doing lots of little experiments with it and seeing what happens as they're developing it, before it's out of control, and that's going to be mainly the researchers in companies.
I don't think you can leave it to philosophers to speculate on what might happen.
Everyone who's ever written a computer program and gotten empirical feedback knows that it quickly disabuses you of the idea that you understood what was going on.
So I agree with people like Sam Altman at Open A.I.
that this stuff is inevitably going to be developed because there are so many good uses of it.
What we need is, as it's being developed, we put a lot of resources into it to try to keep control of it.
Reporter: Back in March, more than a thousand folks in the tech industry, people like Steve Wozniak, asked for a six-month pause on the development of artificial intelligence, and you didn't sign that.
How come?
>> I thought it was completely unrealistic.
The point is these are going to be extremely useful for things like medicine, for reading scans accurately and quickly.
They're going to be useful for designing materials to make more efficient solar cells, for example.
They're going to be tremendously useful; they already are for predicting floods and earthquakes.
They're going to be tremendously useful in understanding climate change, so they're going to be developed.
There's no way that's going to be stopped.
I thought it was maybe a sensible way of getting media attention but not that sensible a thing to ask.
Not feasible.
What we should be asking for is that comparable resources are put into dealing with the possible bad side effects, and into keeping these things under control, as we're developing them.
At present, 99% of the money is going into developing them and 1% going into people saying these things might be dangerous.
It should be more like 50-50, I believe.
Reporter: Are you optimistic we'll be able as humanity to rise to this challenge or are you less so?
>> I think we're entering a time of huge uncertainty.
I think one would be foolish to be either optimistic or pessimistic.
We just don't know what's going to happen.
The best we can do is say, Let's put a lot of effort into ensuring that whatever happens is as good as it could have been.
It's possible that there's no way we will control these superintelligences and that humanity is just a passing phase in the evolution of intelligence.
That in a few hundred years, there will be no man.
Just all artificial intelligence.
We just don't know.
Predicting the future is a bit like looking into a fog.
You can see about 100 yards very clearly and then at 200 yards, you can't see anything.
It's like a wall, and I think that wall is about 100 years.
Reporter: Geoffrey, thank you for your time.